High Intrinsic Dimensionality Facilitates Adversarial Attack: Theoretical Evidence

Authors

Abstract

Machine learning systems are vulnerable to adversarial attack. By applying to the input object a small, carefully-designed perturbation, a classifier can be tricked into making an incorrect prediction. This phenomenon has drawn wide interest, with many attempts made to explain it. However, a complete understanding is yet to emerge. In this paper we adopt a slightly different perspective, one still relevant to classification. We consider retrieval, where the output is the set of objects most similar to a user-supplied query object, corresponding to its k-nearest neighbors. We investigate the effect of perturbation on the ranking of objects with respect to the query. Through theoretical analysis, supported by experiments, we demonstrate that as the intrinsic dimensionality of the data domain rises, the amount of perturbation required to subvert neighborhood rankings diminishes, and vulnerability to attack rises. We examine two modes of perturbation of the query: one that moves it 'closer' to a target point, and one that moves it 'farther' from it. We also examine two perspectives: the 'query-centric', examining the effect on the query's own ranking of points, and the 'target-centric', considering the position of the query point in the target's ranked set. All four cases correspond to practical scenarios involving classification and retrieval.
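The abstract's central claim can be illustrated with a small synthetic experiment. The sketch below is not from the paper; the function names (nn_rank, min_relative_push), the Gaussian data, and the rank thresholds are our own choices. It measures the smallest relative perturbation of a query, along the line toward a target point, that lifts the target into the query's top-10 neighbors, in a low- and a high-dimensional domain:

```python
import numpy as np

rng = np.random.default_rng(0)

def nn_rank(data, query, idx):
    # Rank of point `idx` in the neighbor list of `query` (0 = nearest).
    dists = np.linalg.norm(data - query, axis=1)
    return int(np.argsort(dists).tolist().index(idx))

def min_relative_push(data, query, idx, goal_rank, steps=200):
    # Smallest fraction of the query-to-target gap the query must travel
    # before the target enters its top `goal_rank` neighbors.
    direction = data[idx] - query
    for eps in np.linspace(0.0, 1.0, steps):
        if nn_rank(data, query + eps * direction, idx) < goal_rank:
            return eps
    return 1.0

for dim in (2, 100):  # low- vs high-dimensional data domain
    data = rng.standard_normal((2000, dim))
    query = rng.standard_normal(dim)
    # attack target: the query's current 100th nearest neighbor
    target = int(np.argsort(np.linalg.norm(data - query, axis=1))[100])
    eps = min_relative_push(data, query, target, goal_rank=10)
    print(f"dim={dim:3d}: relative perturbation to reach top 10 = {eps:.3f}")
```

On typical runs the high-dimensional case needs a markedly smaller relative push: as dimensionality grows, neighbor distances concentrate, so the gap between the 100th and the 10th nearest neighbor shrinks in relative terms, which is the vulnerability the paper analyzes.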


Similar Articles

Characterizing Adversarial Subspaces Using Local Intrinsic Dimensionality

Deep Neural Networks (DNNs) have recently been shown to be vulnerable against adversarial examples, which are carefully crafted instances that can mislead DNNs to make errors during prediction. To better understand such attacks, a characterization is needed of the properties of regions (the so-called ‘adversarial subspaces’) in which adversarial examples lie. We tackle this challenge by charact...
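For context, the LID measure used in this line of work is commonly computed with the maximum-likelihood estimator, which fits the growth rate of nearest-neighbor distances. A minimal sketch of that estimator, with our own illustrative data and an arbitrary choice of k:

```python
import numpy as np

def lid_mle(query, data, k=20):
    # Maximum-likelihood LID estimate at `query` (Amsaleg et al., 2015):
    #   LID = -1 / mean_i( log(r_i / r_k) ),
    # where r_1 <= ... <= r_k are the k smallest distances to `data`.
    r = np.sort(np.linalg.norm(data - query, axis=1))
    r = r[r > 0][:k]                  # drop the query itself if present
    return -1.0 / np.mean(np.log(r / r[-1]))

rng = np.random.default_rng(1)
cloud = rng.standard_normal((5000, 50))          # fills all 50 dimensions
print(lid_mle(cloud[0], cloud[1:]))               # high LID estimate
flat = np.zeros((5000, 50))
flat[:, :3] = rng.standard_normal((5000, 3))      # a 3-D subspace of R^50
print(lid_mle(flat[0], flat[1:]))                 # LID estimate near 3
```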



On the Limitation of Local Intrinsic Dimensionality for Characterizing the Subspaces of Adversarial Examples

Understanding and characterizing the subspaces of adversarial examples aid in studying the robustness of deep neural networks (DNNs) to adversarial perturbations. Very recently, Ma et al. (2018) proposed to use local intrinsic dimensionality (LID) in layer-wise hidden representations of DNNs to study adversarial subspaces. It was demonstrated that LID can be used to characterize the adversarial...
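The scheme under discussion (Ma et al., 2018) computes LID not in input space but in each layer's hidden representation, and uses the resulting per-layer scores as features for an adversarial-example detector. The sketch below only shows the shape of that computation; the untrained random ReLU network is a stand-in of our own, not a real DNN:

```python
import numpy as np

def lid_mle(x, refs, k=20):
    # MLE of local intrinsic dimensionality of `x` against reference set `refs`.
    r = np.sort(np.linalg.norm(refs - x, axis=1))
    r = r[r > 0][:k]
    return -1.0 / np.mean(np.log(r / r[-1]))

rng = np.random.default_rng(2)
# Stand-in for a trained DNN: three random ReLU layers (illustration only).
weights = [rng.standard_normal((32, 32)) / np.sqrt(32) for _ in range(3)]

def hidden_states(batch):
    states, h = [], batch
    for w in weights:
        h = np.maximum(h @ w, 0.0)    # layer-wise hidden representations
        states.append(h)
    return states

batch = rng.standard_normal((512, 32))
layers = hidden_states(batch)
# Per-layer LID scores for one input: the feature vector a detector trains on.
features = [lid_mle(layer[0], layer[1:], k=20) for layer in layers]
print(np.round(features, 1))
```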


Estimation of Intrinsic Dimensionality Using High-Rate Vector Quantization

We introduce a technique for dimensionality estimation based on the notion of quantization dimension, which connects the asymptotic optimal quantization error for a probability distribution on a manifold to its intrinsic dimension. The definition of quantization dimension yields a family of estimation algorithms, whose limiting case is equivalent to a recent method based on packing numbers. Usi...
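The quantization-dimension idea can be approximated empirically: by Zador's high-rate quantization theory, the optimal mean squared quantization error decays as n^(-2/d) in the codebook size n, so the slope of log-error against log-n yields the intrinsic dimension d. A rough sketch of our own, using scikit-learn's KMeans as a stand-in for an optimal quantizer (the paper's estimators are more refined):

```python
import numpy as np
from sklearn.cluster import KMeans

def quantization_dim(data, sizes=(4, 8, 16, 32, 64), seed=0):
    # Optimal MSE scales ~ n**(-2/d) with codebook size n (Zador),
    # so d = -2 / slope of log(MSE) versus log(n).
    log_n, log_mse = [], []
    for n in sizes:
        km = KMeans(n_clusters=n, n_init=10, random_state=seed).fit(data)
        log_n.append(np.log(n))
        log_mse.append(np.log(km.inertia_ / len(data)))
    slope = np.polyfit(log_n, log_mse, 1)[0]
    return -2.0 / slope

rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, 4000)
circle = np.c_[np.cos(theta), np.sin(theta), np.zeros_like(theta)]
print(quantization_dim(circle))   # a 1-D manifold in R^3: estimate near 1
```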



Journal

Journal title: IEEE Transactions on Information Forensics and Security

Year: 2021

ISSN: 1556-6013, 1556-6021

DOI: https://doi.org/10.1109/tifs.2020.3023274